Introduction to Open Data Science - Course Project

About the project


The course seems to be set up nicely. At least much better than some online courses about statistics/programming. I hope I’ll relearn to use R (briefly tried it about twelve years ago). RStudio seems convenient and I like the GitHub integration. Additionally, I hope to deepen my understanding of statistical methods.

My GitHub repository: https://github.com/MTurkkila/IODS-project


Regression and model validation

I started with the DataCamp exercises, and afterwards the data wrangling was straightforward and easy. Thus, I used the data file from my local folder in the analysis. Below are the important parts of the script. Before any analysis, the ggplot2 and GGally libraries are loaded, as is common practice.

library(ggplot2)
library(GGally)

Read the data from a local folder and check its structure and dimensions:

students2014 <- read.table("data/students2014.txt", sep="\t")
str(students2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ Attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(students2014)
## [1] 166   7

The data has 7 variables and 166 observations from a questionnaire. The variables deep, stra and surf are sum variables scaled back to the original scale. The data only includes observations with Points greater than 0.
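
As a rough sketch, the wrangling could form such a sum variable like this (the data frame name lrn14 and the question-column vector below are placeholders, not the exact names used in my script):

deep_questions <- c("D03", "D11", "D19", "D27")   # placeholder subset of the deep-learning questions
lrn14$deep <- rowMeans(lrn14[, deep_questions])   # averaging brings the variable back to the original 1-5 scale
# stra and surf are formed the same way from their own question groups
students2014 <- subset(lrn14, Points > 0)         # keep only observations with Points over 0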

Next I use the ggpairs() function to show a graphical overview of the data. In the actual script, I also save the plots to a local plots folder with the dev.copy() function (see the sketch after the plot code below).

p <- ggpairs(students2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
p
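
For reference, the saving in the local script could look roughly like this (the folder and file name here are just placeholders):

dev.copy(png, filename = "plots/ggpairs_students2014.png", width = 800, height = 800)  # copy the plot currently on screen to a png file
dev.off()                                                                              # close the png device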

In addition to plotting the dataframe, I checked the summaries of the data.

summary(students2014)
##  gender       Age           Attitude          deep            stra      
##  F:110   Min.   :17.00   Min.   :14.00   Min.   :1.583   Min.   :1.250  
##  M: 56   1st Qu.:21.00   1st Qu.:26.00   1st Qu.:3.333   1st Qu.:2.625  
##          Median :22.00   Median :32.00   Median :3.667   Median :3.188  
##          Mean   :25.51   Mean   :31.43   Mean   :3.680   Mean   :3.121  
##          3rd Qu.:27.00   3rd Qu.:37.00   3rd Qu.:4.083   3rd Qu.:3.625  
##          Max.   :55.00   Max.   :50.00   Max.   :4.917   Max.   :5.000  
##       surf           Points     
##  Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.417   1st Qu.:19.00  
##  Median :2.833   Median :23.00  
##  Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :4.333   Max.   :33.00

As Attitude, stra and surf have the highest absolute correlations with Points, I chose them for the initial linear model. Below I create a regression model with these multiple explanatory variables and print the summary of the model.

my_model <- lm(Points ~ Attitude + stra + surf, data = students2014)
summary(my_model)
## 
## Call:
## lm(formula = Points ~ Attitude + stra + surf, data = students2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.01711    3.68375   2.991  0.00322 ** 
## Attitude     0.33952    0.05741   5.913 1.93e-08 ***
## stra         0.85313    0.54159   1.575  0.11716    
## surf        -0.58607    0.80138  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

The summary shows the estimates for the parameters of the linear model and the statistical significance of those estimates. Only Attitude is statistically significant (shown by the stars), and therefore I include only it in the second model.

my_model2 <- lm(Points ~ Attitude, data = students2014)
summary(my_model2)
## 
## Call:
## lm(formula = Points ~ Attitude, data = students2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.63715    1.83035   6.358 1.95e-09 ***
## Attitude     0.35255    0.05674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

The summary shows that both parameters, the intercept \(\alpha\) and the slope \(\beta\), are now highly statistically significant. The estimates show that attitude has a positive correlation with the exam points, i.e. a more positive attitude yields more points: roughly \(\text{Points} \approx 11.6 + 0.35 \cdot \text{Attitude}\).

The multiple R-squared shows how much of the variation in the points is explained by the attitude. In this model it is about 19%.
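
As a sanity check, the same figure can be computed by hand from the residual and total sums of squares:

rss <- sum(residuals(my_model2)^2)                               # residual sum of squares
tss <- sum((students2014$Points - mean(students2014$Points))^2)  # total sum of squares
1 - rss / tss                                                    # about 0.19, matching the Multiple R-squared above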

Let's check the validity of the model with diagnostic plots.

par(mfrow = c(2,2))
plot(my_model2, which = c(1, 2, 5))

The model assumes that the errors are normally distributed, uncorrelated and have constant variance. The chosen diagnostic plots check these assumptions: Residuals vs Fitted (1) for constant variance, Normal Q-Q (2) for normality of the errors, and Residuals vs Leverage (5) for influential observations.

Overall, I’d say the model is quite reasonable.


Logistic regression

The Data

The data includes background information and alcohol consumption of Portuguese students. It was joined from two original data sets of student performance, one in math and one in the Portuguese language. The joined data includes two new variables: alc_use and high_use. alc_use is the mean of weekend alcohol use (Walc) and workday alcohol use (Dalc), and high_use is a boolean with the condition alc_use > 2.
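
As a sketch, the two new variables could be computed in the wrangling script roughly like this (assuming the joined data frame is called alc and dplyr is loaded there):

alc <- mutate(alc, alc_use = (Dalc + Walc) / 2)  # mean of workday and weekend use
alc <- mutate(alc, high_use = alc_use > 2)       # TRUE when the mean is above 2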

Below is a list of the variables in the data. The complete description of the original data can be found at the UCI Machine Learning Repository.

alc <- read.csv("data/alc.csv")
variable.names(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

The Analysis

The purpose of this analysis is to examine the relationship between alcohol consumption and free time. The four variables related to free time and thus chosen for the analysis are:

  • freetime - free time after school
  • traveltime - home to school travel time
  • goout - going out with friends
  • activities - extra-curricular activities

The hypothesis is that the more free time a student has and the more they go out, the greater the chance of high alcohol consumption.

First, to study the distributions of the chosen variables, let's draw bar plots:

library(tidyr); library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
alc4 <- c("freetime", "traveltime", "goout","activities")
gather(alc[alc4]) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

Free time and going out seem to be quite normally distributed. Home-to-school travel time is heavily skewed towards short, under 15 minute travel times. Extra-curricular activities are split quite evenly. Thus, it can be expected that free time and going out have more significance for alcohol consumption.

Let's also do a simple cross tabulation of high alcohol consumption with the medians of free time and going out.

alc %>% group_by(high_use) %>% summarise(count = n(), going_out = median(goout), freetime = median(freetime))
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 2 x 4
##   high_use count going_out freetime
##   <lgl>    <int>     <dbl>    <dbl>
## 1 FALSE      268         3        3
## 2 TRUE       114         4        3

The median of going out is higher for the high-consumption group, while the median free time is the same in both groups. It might be that free time in itself does not affect alcohol consumption as much. However, let's start the logistic model with all four variables, as instructed.

Logistic model

To build the logistic model I use the glm() function with all four variables and print the summary of the model.

m <- glm(high_use ~ freetime + traveltime + goout + activities, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ freetime + traveltime + goout + activities, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.5791  -0.7737  -0.5840   0.9795   2.3555  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    -4.1598     0.6076  -6.847 7.56e-12 ***
## freetime        0.1524     0.1340   1.137   0.2553    
## traveltime      0.4226     0.1723   2.452   0.0142 *  
## goout           0.7225     0.1213   5.959 2.54e-09 ***
## activitiesyes  -0.3633     0.2446  -1.485   0.1376    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 406.48  on 377  degrees of freedom
## AIC: 416.48
## 
## Number of Fisher Scoring iterations: 4

As expected, going out is a significant predictor of high alcohol consumption. Surprisingly, travel time also has statistical significance. Let's build a new model with these two variables.

m <- glm(high_use ~ traveltime + goout, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ traveltime + goout, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.4676  -0.8436  -0.6050   1.0674   2.3813  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -3.9443     0.4973  -7.932 2.16e-15 ***
## traveltime    0.4142     0.1712   2.419   0.0156 *  
## goout         0.7553     0.1165   6.485 8.87e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 409.76  on 379  degrees of freedom
## AIC: 415.76
## 
## Number of Fisher Scoring iterations: 4

Next, compute the odds ratios (OR) and their confidence intervals (CI) and print them.

OR <- coef(m) %>% exp
CI <- confint(m) %>% exp
cbind(OR, CI)
##                     OR       2.5 %     97.5 %
## (Intercept) 0.01936554 0.007015459 0.04951509
## traveltime  1.51309541 1.083767881 2.12550643
## goout       2.12821744 1.703804423 2.69227727

As the odds ratio is higher than 1 for both variables, both are positively associated with high alcohol consumption. For example, the OR of about 2.1 for goout means that each one-point increase in going out roughly doubles the odds of high consumption.

Predictions from the model

To explore the predictive power of the logistic model m, I use the predict() function to make predictions for high alcohol consumption. The predicted probabilities are added to the alc dataframe, along with a boolean prediction that is TRUE when the probability exceeds 0.5. Finally, we print a cross tabulation of the predictions against the actual values.

probabilities <- predict(m, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   247   21
##    TRUE     78   36

Let's also plot the actual values and the predictions.

g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g + geom_point()

It looks like there are quite a few false predictions, so let's check the training error by first defining a loss function.

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

Next, we call that loss function to compute the percentage of wrong predictions.

loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2591623

The result shows that around 26% of the predictions are wrong. The model is not very good at predicting, but it is still somewhat better than simple fifty-fifty guessing; on the other hand, always guessing "not high use" would already give an error of about 30% (114 of the 382 students are high users), so the improvement is modest.
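
The same number can be read directly off the cross tabulation above, since the off-diagonal cells are exactly the wrong predictions:

(21 + 78) / (247 + 21 + 78 + 36)  # share of off-diagonal cells, about 0.26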

Cross-validation

Let's use the cv.glm() function from the 'boot' library for K-fold cross-validation, starting with 10-fold cross-validation. The function returns a vector 'delta' whose first component is the estimate of the prediction error. The cv.glm() call uses the previously defined loss function as the cost function.

library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]
## [1] 0.2617801

The prediction error is slightly higher than the training error, and it is also close to the error in the DataCamp exercise. Let's check whether it is possible to find a logistic model with a smaller error.
The next bit of code defines ten logistic regression models with different numbers of predictor variables. In a for loop, prediction probabilities are computed for each model and 10-fold cross-validation is performed. Finally, the training and prediction errors are plotted against the number of predictors.

# Dataframe for training and prediction errors
errors <- data.frame(matrix(ncol=3,nrow=0, dimnames=list(NULL, c("n_pred", "training", "prediction"))))

# Models with different number of predictors
m1 <- high_use ~ goout + traveltime + freetime + activities + studytime + paid + romantic+G1+G2+G3
m2 <- high_use ~ goout + traveltime + freetime + activities + studytime + paid + romantic+G1+G2
m3 <- high_use ~ goout + traveltime + freetime + activities + studytime + paid + romantic+G1
m4 <- high_use ~ goout + traveltime + freetime + activities + studytime + paid + romantic
m5 <- high_use ~ goout + traveltime + freetime + activities + studytime + paid
m6 <- high_use ~ goout + traveltime + freetime + activities + studytime
m7 <- high_use ~ goout + traveltime + freetime + activities # The original model
m8 <- high_use ~ goout + traveltime + freetime
m9 <- high_use ~ goout + traveltime # The updated model used in assignments
m10 <- high_use ~ goout

for(i in 1:10) {
  tmp <- paste0("m",i)
  m <- glm(get(tmp), data = alc, family = "binomial")  # get() retrieves the formula object m1..m10 by its name
  probabilities <- predict(m, type = "response")
  alc <- mutate(alc, probability = probabilities)
  alc <- mutate(alc, prediction = probability > 0.5)
  cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
  errors[i, "n_pred"] <- 11-i
  errors[i, "training"] <- loss_func(class = alc$high_use, prob = alc$probability)
  errors[i, "prediction"] <- cv$delta[1]
}

g <- errors %>% gather(key,error, training, prediction) %>%ggplot(aes(x=n_pred, y=error, colour=key))
g + geom_line() + scale_x_discrete(limits=c(1:10), name = "Number of predictors")

Both errors decrease slightly as the number of predictors increases, but the trend is neither linear nor particularly stable. Also, the absolute change is not that large: the training error only drops from 26% to 22% between 2 predictors (the model used in the exercises) and 6 predictors. If I check the summary of that six-predictor model, only studytime is statistically significant in addition to the two predictors of the smaller model. Of course, it would be possible to continue this exploration and maybe find an even smaller training and prediction error. Then we would be getting close to machine learning, and that's a topic for another time.

m <- glm(m5, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = m5, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.6671  -0.7784  -0.5273   0.8741   2.4554  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    -3.1062     0.7143  -4.348 1.37e-05 ***
## goout           0.7234     0.1218   5.941 2.84e-09 ***
## traveltime      0.3829     0.1754   2.183 0.029070 *  
## freetime        0.1199     0.1370   0.875 0.381614    
## activitiesyes  -0.2689     0.2513  -1.070 0.284700    
## studytime      -0.5986     0.1717  -3.487 0.000488 ***
## paidyes         0.4775     0.2549   1.873 0.061055 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 391.34  on 375  degrees of freedom
## AIC: 405.34
## 
## Number of Fisher Scoring iterations: 4

Clustering and classification

In this chapter I look into linear discriminant analysis and k-means clustering using a ready-made data set from the MASS library. Please also see the bonus section, though it is not quite finished yet.

The Data

Access the required libraries

library(MASS); library(tidyr); library(ggplot2); library(corrplot)

First I load the Boston data set and explore it with str and summary.

data("Boston")
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

The MASS package contains data sets to accompany the book "Modern Applied Statistics with S" by W. N. Venables and B. D. Ripley, with several distinct data sets from different sources. The Boston data set contains housing values in the suburbs of Boston. It includes, for example, the variables crim, the per capita crime rate by town, and age, the proportion of owner-occupied units built prior to 1940. For complete descriptions please see the Boston {MASS} help page: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html

Next, let's plot the distributions of the variables. For fun, let's use the color of the Faculty of Educational Sciences.

gather(Boston) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar(fill="#fcd116", color="#fcd116", alpha=0.9)

From the bar charts it is apparent that the variables are not normally distributed and are, in most cases, strongly skewed.

To assess the relationships between the variables, let's plot a correlation matrix. I think the lower triangular matrix looks better, so let's use type = "lower" instead of upper as in the DataCamp exercises.

cor_matrix <- cor(Boston)
cor_matrix <- cor_matrix %>% round(2)
corrplot(cor_matrix, method="circle", type="lower", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)

The per capita crime rate correlates positively most strongly with the variables rad and tax, which are the index of accessibility to radial highways and the full-value property-tax rate per $10,000, respectively. These two variables are also strongly correlated with each other. I cannot think of an obvious reason for the connection between these variables and the crime rate.

For the following analyses we need to standardize the data, so I use the scale() function. The function subtracts the column mean from each value and then divides it by the column standard deviation: \(scaled(x) = \frac{x-\bar{x}}{\sigma_x}\)

We can see this effect with the summary.

boston_scaled <- scale(Boston)
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

The scaling does not change the shape of the distributions, but it centers every variable around zero (the mean) and scales it to unit standard deviation.
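
A quick check confirms this: after scaling, every column should have mean 0 and standard deviation 1.

round(colMeans(boston_scaled), 2)      # column means are all 0
round(apply(boston_scaled, 2, sd), 2)  # column standard deviations are all 1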

The scaled data set is a matrix instead of a dataframe, so let's change it back. Replotting the distributions, we would see that the shapes of the distributions do not change, but the values are now on the standardized scale (see the sketch after the next code block).

class(boston_scaled)
## [1] "matrix"
boston_scaled <- as.data.frame(boston_scaled)
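
The replot mentioned above is not shown in the script, but as a sketch it would reuse the same gather + facet approach as before:

gather(boston_scaled) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar(fill="#fcd116", color="#fcd116", alpha=0.9)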

Next, I create a categorical variable of the crime rate by first computing the quantile break points, then cutting the data into those four bins and giving each bin a label. The original crim variable is removed and the categorical variable is added to the scaled data set.

bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)

Additionally, I divide the data set into train and test sets for later use. To randomly select 80% of the data as the train set, I use sample() to draw n * 0.8 row indices from the scaled data set and then use those indices to select the rows. Using [-ind, ] selects the rows not in [ind, ]. Lastly, I save the correct crime classes from the test set and remove them from that set.

n <- nrow(boston_scaled)
ind <- sample(n,  size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
correct_classes <- test$crime
test <- dplyr::select(test, -crime)

Linear discriminant analysis

More about LDA in this StatQuest video.

For the linear discriminant analysis I use the categorical crime rate created above as the target variable and all other variables as predictors. The data for the fit is the train set previously split from the boston_scaled dataframe. Next, I draw the LDA plot.

palette(c("#fcd116","#00a6ed","#f6511d", "#7fb800")) # Custom color palette
lda.fit <- lda(crime ~., data = train)
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)

Now it is possible to predict crime rate categories using the LDA fit. Let's check the predictions for the test set against the correct classes and cross tabulate the results.

lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       14      12        0    0
##   med_low    5      19        4    0
##   med_high   0       8       14    0
##   high       0       0        1   25

The predictions are quite good, as most observations fall on the diagonal. The med_low column, however, collects several false positives from the low and med_high classes, while the high class is predicted almost perfectly.
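
The share of correct predictions can also be computed directly from the same table:

tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)  # proportion of test observations on the diagonal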

Before k-means clustering, I standardize the original Boston data set again and calculate the Euclidean and Manhattan distances.

Boston <- scale(Boston)
Boston <- as.data.frame(Boston)
dist_eu <- dist(Boston)
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
dist_man <- dist(Boston, method = "manhattan")
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

The Euclidean distances are clearly shorter, as I think they should be.
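
A quick check with a single pair of observations illustrates why: the Manhattan distance sums the absolute differences, while the Euclidean distance takes the square root of the summed squared differences, so the Manhattan distance can never be smaller.

dist(Boston[1:2, ])                        # Euclidean distance between the first two observations
dist(Boston[1:2, ], method = "manhattan")  # Manhattan distance between the same observations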

Next I do k-means clustering with three centers and draw a scatter plot matrix of a few interesting variables, with the point colors corresponding to the clusters.

km <-kmeans(Boston, centers = 3)
pairs(Boston[,c(1,9, 10, 12, 13, 14)], col = km$cluster)

To determine the optimal number of clusters, I calculate and visualize the total within-cluster sum of squares (TWCSS) as a function of the number of clusters. As kmeans assigns the initial clusters randomly, I set a constant seed for the random number generator.

set.seed(123) # the seed for rng
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss}) # these lines are from the datacamp exercise
qplot(x = 1:k_max, y = twcss, geom = 'line') + scale_x_continuous(breaks=c(1:10))

The TWCSS drops most drastically when moving from one to two clusters, which suggests that two is the optimal number of clusters.

km <-kmeans(Boston, centers = 2)
pairs(Boston[,c(1,9, 10, 12, 13, 14)], col = km$cluster)

Visually, two clusters also look better than three.


Bonus

For this bonus assignment I fit an LDA using the three clusters from the k-means algorithm as the target classes.

Boston <- dplyr::select(Boston, -chas) # need to remove this variable as the lda() fit doesn't work with it when knitting
km <-kmeans(Boston, centers = 3)
lda.fit <- lda(km$cluster ~., data = Boston)

The visualization of the clusters and the arrows is done with code from DataCamp.

lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(km$cluster)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 3, color="black")

With three clusters it seems like the variables age, zn, tax and nox are the most influential separators.

Super-Bonus

The code below takes the (scaled) train data that was used to fit the LDA and creates a matrix product, which is a projection of the data points onto the linear discriminants.

lda.fit <- lda(crime ~., data = train)
model_predictors <- dplyr::select(train, -crime)
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
library(plotly)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)


Some thoughts